662 research outputs found
MAPTree: Beating "Optimal" Decision Trees with Bayesian Decision Trees
Decision trees remain one of the most popular machine learning models today,
largely due to their out-of-the-box performance and interpretability. In this
work, we present a Bayesian approach to decision tree induction via maximum a
posteriori inference of a posterior distribution over trees. We first
demonstrate a connection between maximum a posteriori inference of decision
trees and AND/OR search. Using this connection, we propose an AND/OR search
algorithm, dubbed MAPTree, which is able to recover the maximum a posteriori
tree. Lastly, we demonstrate the empirical performance of the maximum a
posteriori tree both on synthetic data and in real world settings. On 16 real
world datasets, MAPTree either outperforms baselines or demonstrates comparable
performance but with much smaller trees. On a synthetic dataset, MAPTree also
demonstrates greater robustness to noise and better generalization than
existing approaches. Finally, MAPTree recovers the maximum a posteriori tree
faster than existing sampling approaches and, in contrast with those
algorithms, is able to provide a certificate of optimality. The code for our
experiments is available at https://github.com/ThrunGroup/maptree.
Comment: 19 pages
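The posterior search described above can be illustrated with a brute-force sketch: score each candidate tree by a prior over tree shapes times the marginal likelihood of the labels at its leaves, and return the argmax. The Beta(1,1) leaf likelihood and fixed split probability below are hypothetical stand-ins for MAPTree's actual model, and the exhaustive recursion over 0/1 features replaces its efficient AND/OR search:

```python
import math

# Brute-force sketch of MAP decision-tree induction. The Beta(1,1)
# leaf likelihood and the fixed split prior are illustrative
# assumptions; MAPTree explores this kind of search space with an
# efficient AND/OR search rather than exhaustive recursion.

def log_leaf_marginal(labels, alpha=1.0, beta=1.0):
    """Log marginal likelihood of binary labels under a Beta-Bernoulli leaf."""
    n1 = sum(labels)
    n0 = len(labels) - n1
    return (math.lgamma(alpha + beta) - math.lgamma(alpha) - math.lgamma(beta)
            + math.lgamma(alpha + n1) + math.lgamma(beta + n0)
            - math.lgamma(alpha + beta + n0 + n1))

def map_tree(X, y, depth=0, max_depth=2, log_p_split=math.log(0.5)):
    """Return (log posterior score, tree) maximized over trees on 0/1 features."""
    log_p_leaf = math.log1p(-math.exp(log_p_split))
    frac = sum(y) / len(y) if y else 0.0
    best = (log_p_leaf + log_leaf_marginal(y), ("leaf", frac))
    if depth == max_depth or not y:
        return best
    for j in range(len(X[0])):
        yl = [l for r, l in zip(X, y) if not r[j]]
        yr = [l for r, l in zip(X, y) if r[j]]
        if not yl or not yr:            # degenerate split, skip
            continue
        Xl = [r for r in X if not r[j]]
        Xr = [r for r in X if r[j]]
        sl, tl = map_tree(Xl, yl, depth + 1, max_depth, log_p_split)
        sr, tr = map_tree(Xr, yr, depth + 1, max_depth, log_p_split)
        score = log_p_split + sl + sr   # prior cost of splitting + subtree scores
        if score > best[0]:
            best = (score, ("split", j, tl, tr))
    return best
```

On data where a single feature separates the classes, the improved leaf likelihoods outweigh the prior's preference for small trees, so the MAP tree is the single split; on homogeneous labels the bare leaf wins.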
Bayesian Decision Trees via Tractable Priors and Probabilistic Context-Free Grammars
Decision Trees are some of the most popular machine learning models today due
to their out-of-the-box performance and interpretability. Often, Decision Tree
models are constructed greedily in a top-down fashion via heuristic search
criteria, such as Gini impurity or entropy. However, trees constructed in this
manner are sensitive to minor fluctuations in training data and are prone to
overfitting. In contrast, Bayesian approaches to tree construction formulate
the selection process as a posterior inference problem; such approaches are
more stable and provide greater theoretical guarantees. However, generating
Bayesian Decision Trees usually requires sampling from complex, multimodal
posterior distributions. Current Markov Chain Monte Carlo-based approaches for
sampling Bayesian Decision Trees are prone to mode collapse and long mixing
times, which makes them impractical. In this paper, we propose a new criterion
for training Bayesian Decision Trees. Our criterion gives rise to BCART-PCFG,
which can efficiently sample decision trees from a posterior distribution
across trees given the data and find the maximum a posteriori (MAP) tree.
Learning the posterior and training the sampler can be done in time that is
polynomial in the dataset size. Once the posterior has been learned, trees can
be sampled efficiently (linearly in the number of nodes). At the core of our
method is a reduction of sampling the posterior to sampling a derivation from a
probabilistic context-free grammar. We find that trees sampled via BCART-PCFG
perform comparably to or better than greedily-constructed Decision Trees in
classification accuracy on several datasets. Additionally, the trees sampled
via BCART-PCFG are significantly smaller -- sometimes by as much as 20x.
Comment: 10 pages, 1 figure
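The reduction's key primitive, sampling a derivation from a probabilistic context-free grammar, can be sketched generically. The two-rule grammar below (a tree either terminates as a leaf or expands into two subtrees) is a made-up stand-in for the grammar BCART-PCFG learns from data; once the rule probabilities are fixed, each draw costs time linear in the size of the sampled tree, matching the abstract's claim:

```python
import random

# Generic PCFG derivation sampler. The grammar here is a hypothetical
# stand-in, not BCART-PCFG's learned, data-conditioned grammar.
GRAMMAR = {
    # nonterminal -> list of (probability, right-hand side)
    "T": [(0.6, ["leaf"]),             # terminate as a leaf
          (0.4, ["split", "T", "T"])]  # expand into two subtrees
}

def sample_derivation(symbol="T", rng=random):
    """Recursively sample a derivation tree rooted at `symbol`."""
    if symbol not in GRAMMAR:                  # terminal symbol
        return symbol
    r, acc = rng.random(), 0.0
    for p, rhs in GRAMMAR[symbol]:
        acc += p
        if r <= acc:
            return [sample_derivation(s, rng) for s in rhs]
    # numerical fallback: expand the last rule
    return [sample_derivation(s, rng) for s in GRAMMAR[symbol][-1][1]]
```

Because the leaf rule has probability above one half, the branching process is subcritical and every draw terminates with probability one, which is the sort of property a posterior-derived grammar would also need.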
The use of blog to facilitate clinical learning during practicum among undergraduate nursing students: a preliminary study
Blogging has been increasingly used in the education field. This study is the first to explore the use of blogging to facilitate clinical learning for undergraduate nursing students (N=57) with surface or deep study approaches during their clinical practicum in Hong Kong. The findings indicated that blogging is useful in facilitating clinical learning as it fulfills different learning styles and needs of students by promoting information and knowledge sharing, enhancing self-reflection and providing emotional support.
The 6th International Conference on Knowledge Management (ICKM 2009), Hong Kong, 3-4 December 2009. In Proceedings of ICKM, 2009, p. 1-1
Using blogs to support internship for information management and nursing students
The 6th International Conference on Knowledge Management (ICKM 2009), Hong Kong, 3-4 December 2009. In Proceedings of the ICKM, 2009, p. 1-1
Harnessing the Power of Choices in Decision Tree Learning
We propose a simple generalization of standard and empirically successful
decision tree learning algorithms such as ID3, C4.5, and CART. These
algorithms, which have been central to machine learning for decades, are greedy
in nature: they grow a decision tree by iteratively splitting on the best
attribute. Our algorithm, Top-k, considers the k best attributes as
possible splits instead of just the single best attribute. We demonstrate,
theoretically and empirically, the power of this simple generalization. We
first prove a "greediness hierarchy theorem" showing that for every k,
Top-(k+1) can be dramatically more powerful than Top-k: there are data
distributions for which the former achieves accuracy 1 - ε, whereas the
latter only achieves accuracy 1/2 + ε. We then
show, through extensive experiments, that Top-k outperforms the two main
approaches to decision tree learning: classic greedy algorithms and more recent
"optimal decision tree" algorithms. On one hand, Top-k consistently enjoys
significant accuracy gains over greedy algorithms across a wide range of
benchmarks. On the other hand, Top-k is markedly more scalable than optimal
decision tree algorithms and is able to handle dataset and feature set sizes
that remain far beyond the reach of these algorithms.
Comment: NeurIPS 202
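A minimal version of Top-k can be written directly from this description: rank attributes by a greedy criterion, branch the search over the k best rather than only the first, and keep whichever choice yields the lowest training error. Entropy gain and 0/1 training error are standard choices assumed here for concreteness (binary 0/1 features), not necessarily the paper's exact configuration:

```python
import math
from collections import Counter

# Sketch of the Top-k generalization of greedy tree induction:
# branch over the k highest-gain attributes instead of only the best.

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain(X, y, j):
    """Information gain of splitting on attribute j."""
    parts = {}
    for row, label in zip(X, y):
        parts.setdefault(row[j], []).append(label)
    return entropy(y) - sum(len(p) / len(y) * entropy(p) for p in parts.values())

def topk_tree(X, y, k=2, depth=0, max_depth=3):
    """Return (training errors, tree) for the best tree found by Top-k."""
    majority = Counter(y).most_common(1)[0][0]
    best = (sum(label != majority for label in y), ("leaf", majority))
    if depth == max_depth or best[0] == 0:
        return best
    # consider the k best attributes, not just the single best
    feats = sorted(range(len(X[0])), key=lambda j: gain(X, y, j), reverse=True)[:k]
    for j in feats:
        Xl = [r for r in X if r[j] == 0]; yl = [l for r, l in zip(X, y) if r[j] == 0]
        Xr = [r for r in X if r[j] == 1]; yr = [l for r, l in zip(X, y) if r[j] == 1]
        if not yl or not yr:
            continue
        el, tl = topk_tree(Xl, yl, k, depth + 1, max_depth)
        er, tr = topk_tree(Xr, yr, k, depth + 1, max_depth)
        if el + er < best[0]:
            best = (el + er, ("split", j, tl, tr))
    return best
```

With k=1 this reduces exactly to the classic greedy recursion; larger k widens the search at a cost of roughly a factor k per level, which is the trade-off the experiments above quantify.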
Maximizing Network Lifetime of Wireless Sensor-Actuator Networks under Graph Routing
Process industries are adopting wireless sensor-actuator networks (WSANs) as their communication infrastructure. The dynamics of industrial environments and stringent reliability requirements necessitate a high degree of fault tolerance in routing. WirelessHART is an open industrial standard for WSANs that has seen worldwide deployments. WirelessHART employs graph routing schemes to achieve network reliability through multiple paths. Since many industrial devices operate on batteries in harsh environments where changing batteries is prohibitively labor-intensive, WSANs need to achieve long network lifetime. To meet industrial demand for long-term reliable communication, this paper studies the problem of maximizing network lifetime for WSANs under graph routing. We formulate the network lifetime maximization problem for WirelessHART networks under graph routing, and then propose an optimal algorithm and two more efficient algorithms to prolong the network lifetime of WSANs. Experiments in a physical testbed and simulations show our linear programming relaxation and greedy heuristics can improve the network lifetime by up to 50% while preserving the reliability benefits of graph routing.
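The lifetime objective here can be made concrete with a toy two-path example: lifetime is the time until the first node's battery is exhausted, so the traffic split across a routing graph's redundant paths should balance per-node drain. All batteries, costs, and the topology below are invented for illustration, and the one-dimensional grid search stands in for the paper's linear programming relaxation:

```python
# Toy lifetime-maximization example. Numbers are made up; the paper's
# LP relaxation and greedy heuristics handle the general problem.

BATTERY = {"a": 100.0, "b": 100.0, "c": 60.0}
# Energy each node spends per unit of traffic on each candidate path.
COST = {
    "path1": {"a": 2.0, "b": 0.0, "c": 1.0},
    "path2": {"a": 0.0, "b": 2.0, "c": 1.0},
}

def lifetime(frac_on_path1):
    """Time until the first node exhausts its battery under this split."""
    drain = {n: frac_on_path1 * COST["path1"][n]
                + (1 - frac_on_path1) * COST["path2"][n]
             for n in BATTERY}
    return min(BATTERY[n] / drain[n] for n in BATTERY if drain[n] > 0)

# One-dimensional search over the traffic split (a stand-in for the LP).
best_frac = max((i / 100 for i in range(101)), key=lifetime)
```

Routing everything over one path kills node a (or b) early, while an even split shifts the bottleneck to the shared relay c and extends the lifetime by 20% in this toy instance, illustrating why multi-path graph routing and lifetime maximization interact.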
BanditPAM++: Faster k-medoids Clustering
Clustering is a fundamental task in data science with wide-ranging
applications. In k-medoids clustering, cluster centers must be actual
datapoints and arbitrary distance metrics may be used; these features allow for
greater interpretability of the cluster centers and the clustering of exotic
objects in k-medoids clustering, respectively. k-medoids clustering has
recently grown in popularity due to the discovery of more efficient k-medoids
algorithms. In particular, recent research has proposed BanditPAM, a randomized
k-medoids algorithm with state-of-the-art complexity and clustering accuracy.
In this paper, we present BanditPAM++, which accelerates BanditPAM via two
algorithmic improvements, and is faster than BanditPAM in complexity and
substantially faster than BanditPAM in wall-clock runtime. First, we
demonstrate that BanditPAM has a special structure that allows the reuse of
clustering information within each iteration. Second, we demonstrate
that BanditPAM has additional structure that permits the reuse of information
across different iterations. These observations inspire our proposed
algorithm, BanditPAM++, which returns the same clustering solutions as
BanditPAM but often several times faster. For example, on the CIFAR10 dataset,
BanditPAM++ returns the same results as BanditPAM but runs over 10x
faster. Finally, we provide a high-performance C++ implementation of
BanditPAM++, callable from Python and R, that may be of interest to
practitioners at https://github.com/motiwari/BanditPAM. Auxiliary code to
reproduce all of our experiments via a one-line script is available at
https://github.com/ThrunGroup/BanditPAM_plusplus_experiments.
Comment: NeurIPS 202
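The reuse idea can be illustrated with a toy PAM-style loop in which a cached distance oracle lets every swap iteration reuse computations made by earlier ones. The 1-D dataset and exact memoization below are illustrative assumptions; BanditPAM++'s actual reuse is of sampling statistics inside its bandit routine, which this sketch only gestures at:

```python
from functools import lru_cache

# Toy k-medoids swap loop with a shared distance cache: later
# iterations reuse distances computed by earlier ones, the kind of
# cross-iteration reuse the abstract describes.

POINTS = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 9.0]
CALLS = {"n": 0}                     # count of *fresh* distance evaluations

@lru_cache(maxsize=None)
def dist(i, j):
    CALLS["n"] += 1
    return abs(POINTS[i] - POINTS[j])

def total_loss(medoids):
    """Sum of distances from each point to its nearest medoid."""
    return sum(min(dist(i, m) for m in medoids) for i in range(len(POINTS)))

def swap_step(medoids):
    """Try every (medoid, non-medoid) swap; keep the best improvement."""
    best = (total_loss(medoids), medoids)
    for out in medoids:
        for inn in range(len(POINTS)):
            if inn in medoids:
                continue
            cand = tuple(sorted(m for m in medoids if m != out) + [inn])
            loss = total_loss(cand)
            if loss < best[0]:
                best = (loss, cand)
    return best[1]

medoids = (0, 1)            # arbitrary initialization
for _ in range(5):          # swap iterations all share the cache
    medoids = swap_step(medoids)
```

Every swap candidate re-touches the same point-to-medoid distances, so the cache bounds fresh evaluations by the number of distinct pairs even though the loop requests them thousands of times.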
Faster Maximum Inner Product Search in High Dimensions
Maximum Inner Product Search (MIPS) is a ubiquitous task in machine learning
applications such as recommendation systems. Given a query vector and n atom
vectors in d-dimensional space, the goal of MIPS is to find the atom that has
the highest inner product with the query vector. Existing MIPS algorithms scale
at least as O(√d), which becomes computationally prohibitive in
high-dimensional settings. In this work, we present BanditMIPS, a novel
randomized MIPS algorithm whose complexity is independent of d. BanditMIPS
estimates the inner product for each atom by subsampling coordinates and
adaptively evaluates more coordinates for more promising atoms. The specific
adaptive sampling strategy is motivated by multi-armed bandits. We provide
theoretical guarantees that BanditMIPS returns the correct answer with high
probability, while improving the complexity in d from O(√d) to O(1). We
also perform experiments on four synthetic and real-world datasets
and demonstrate that BanditMIPS outperforms prior state-of-the-art algorithms.
For example, on the MovieLens dataset (n=4,000, d=6,000), BanditMIPS is
20x faster than the next best algorithm while returning the same answer.
BanditMIPS requires no preprocessing of the data and includes a hyperparameter
that practitioners may use to trade off accuracy and runtime. We also propose a
variant of our algorithm, named BanditMIPS-α, which achieves further
speedups by employing non-uniform sampling across coordinates. Finally, we
demonstrate how known preprocessing techniques can be used to further
accelerate BanditMIPS, and discuss applications to Matching Pursuit and Fourier
analysis.
Comment: 23 pages
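The adaptive subsampling strategy can be sketched as a best-arm identification loop: estimate each atom's inner product from a growing sample of coordinates and drop atoms whose confidence interval falls below the leader's. The fixed-width 1/√n interval and batch schedule below are simplified stand-ins for the paper's actual bandit-based bounds:

```python
import random

# Sketch of bandit-style MIPS: estimate inner products from sampled
# coordinates, keep sampling only for atoms still plausibly optimal.
# The elimination rule is a simplified stand-in for the paper's bounds.

def bandit_mips(query, atoms, batch=32, width=2.0, rng=random):
    """Return the index of the atom estimated to maximize <query, atom>."""
    d = len(query)
    alive = list(range(len(atoms)))
    sums = {a: 0.0 for a in alive}
    n = 0                      # coordinates sampled (with replacement) per alive atom
    while len(alive) > 1 and n < d:
        coords = [rng.randrange(d) for _ in range(batch)]
        n += batch
        for a in alive:
            sums[a] += sum(query[c] * atoms[a][c] for c in coords)
        est = {a: sums[a] / n for a in alive}     # mean coordinate product
        radius = width / n ** 0.5                 # interval shrinks as 1/sqrt(n)
        top = max(est.values())
        alive = [a for a in alive if est[a] + 2 * radius >= top]
    return max(alive, key=lambda a: sums[a])
```

Clearly suboptimal atoms are eliminated after a few batches, so the work concentrates on near-ties; since every surviving atom sees the same number of coordinates, their running sums stay directly comparable.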
Electron-phonon physics from first principles using the EPW code
EPW is an open-source software package for calculations of
electron-phonon interactions and related materials properties. The code
combines density functional perturbation theory and maximally-localized Wannier
functions to efficiently compute electron-phonon coupling matrix elements on
ultra-fine Brillouin zone grids. This data is employed for predictive
calculations of temperature-dependent properties and phonon-assisted quantum
processes in bulk solids and low-dimensional materials. Here, we report on
significant new developments in the code that occurred during the period
2016-2022, namely: a transport module for the calculation of charge carrier
mobility and conductivity under electric and magnetic fields within the
Boltzmann transport equation; a superconductivity module
for the calculation of critical temperature and gap structure in
phonon-mediated superconductors within the anisotropic
multi-band Eliashberg theory; an optics module for calculations of
phonon-assisted indirect transitions; a module for the calculation of small and
large polarons without supercells using the polaron
equations; and a module for calculating electron-phonon couplings, band
structure renormalization, and temperature-dependent optical spectra using the
special displacement method. For each capability, we outline the methodology
and implementation, and provide example calculations. We describe recent code
refactoring to prepare EPW for exascale architectures, we discuss efficient
parallelization strategies, and report on extreme parallel scaling tests.
Comment: 61 pages, 9 figures
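As a concrete example of the transport module's target quantity, the carrier mobility within the relaxation-time approximation to the Boltzmann transport equation takes the standard form (the conventions here are generic and may differ in detail from the code's own):

```latex
\mu_{\alpha\beta} = \frac{-e}{n_c\,\Omega} \sum_n \int
  \frac{d\mathbf{k}}{\Omega_{\mathrm{BZ}}}\,
  \frac{\partial f^0_{n\mathbf{k}}}{\partial \varepsilon_{n\mathbf{k}}}\,
  v_{n\mathbf{k},\alpha}\, v_{n\mathbf{k},\beta}\, \tau_{n\mathbf{k}}
```

where n_c is the carrier density, Ω the unit-cell volume, v_{nk} the band velocities, and τ_{nk} the carrier relaxation time; the electron-phonon machinery described above enters through τ_{nk}, evaluated from the Wannier-interpolated coupling matrix elements on the ultra-fine Brillouin zone grids.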